
    Computation of the inverse Laplace Transform based on a Collocation method which uses only real values

    We develop a numerical algorithm for inverting a Laplace transform (LT), based on a Laguerre polynomial series expansion of the inverse function, under the assumption that the LT is known on the real axis only. The method belongs to the class of Collocation methods (C-methods) and is applicable when the LT function is regular at infinity. The difficulties associated with these problems are due to their intrinsic ill-posedness. The main contribution of this paper is to provide computable estimates of the truncation, discretization, conditioning and roundoff errors introduced by the numerical computations. Moreover, we introduce the pseudoaccuracy, which is used by the numerical algorithm to provide uniformly scaled accuracy of the computed approximation for any x with respect to e^x. These estimates are then employed to dynamically truncate the series expansion. In other words, the number of terms in the series acts as the regularization parameter providing the trade-off between errors. To validate the reliability and usability of the algorithm, experiments were carried out on several test functions.
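    For orientation, a minimal sketch of the kind of Laguerre expansion a Collocation method can build from real-axis samples of the transform is given below; the scaling parameter b > 0 and the choice of real collocation points s_j are assumptions of this sketch, not the paper's specific algorithm.

\[
f(x) \;\approx\; \sum_{k=0}^{N} c_k \, e^{-b x} L_k(2 b x)
\quad\Longrightarrow\quad
F(s) \;\approx\; \sum_{k=0}^{N} c_k \, \frac{(s-b)^k}{(s+b)^{k+1}}.
\]

    Enforcing the second relation at N+1 distinct real points s_0, ..., s_N where F is known yields a linear system for the coefficients c_k; the truncation level N then plays the role of the regularization parameter mentioned in the abstract.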

    A scalable space-time domain decomposition approach for solving large-scale nonlinear regularized inverse ill-posed problems in 4D variational data assimilation

    We develop innovative algorithms for solving the strong-constraint formulation of four-dimensional variational data assimilation in large-scale applications. We present a space-time decomposition approach that employs domain decomposition along both the spatial and temporal directions in the overlapping case and involves partitioning of both the solution and the operators. Starting from the global functional defined on the entire domain, we obtain regularized local functionals on the set of subdomains, providing order reduction of both the predictive and the data assimilation models. We analyze the convergence of the algorithm and its performance in terms of reduction of time complexity and algorithmic scalability. The numerical experiments are carried out on the shallow water equation on the sphere, according to the setup available at the Ocean Synthesis/Reanalysis Directory provided by Hamburg University.
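    For context, the strong-constraint 4D-Var formulation referred to above minimizes a global functional of the form below; the notation is standard, while the overlap penalty shown for the local functionals is only an assumed, schematic way of illustrating how a space-time decomposition couples neighbouring subdomains.

\[
J(x_0) \;=\; \|x_0 - x_b\|^2_{B^{-1}}
\;+\; \sum_{k=0}^{K} \big\| H_k\, \mathcal{M}_{0\to k}(x_0) - y_k \big\|^2_{R_k^{-1}},
\]

    where x_b is the background state, \mathcal{M}_{0\to k} the forecast model from t_0 to t_k, H_k the observation operators, and B, R_k the background and observation error covariances. On an overlapping space-time subdomain \Omega_i one then works with a regularized local functional such as

\[
J_i(x_i) \;=\; J\big|_{\Omega_i}(x_i) \;+\; \mu \,\big\| x_i - x_j \big\|^2_{\Omega_i \cap \Omega_j},
\]

    whose exchange term keeps the local solutions consistent on the overlaps (the weight \mu and the exact form of this term are assumptions of the sketch).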

    Parallel framework for dynamic domain decomposition of data assimilation problems: a case study on Kalman Filter algorithm

    We focus on Partial Differential Equation (PDE)-based Data Assimilation (DA) problems solved by means of variational approaches and the Kalman filter algorithm. Recently, we presented a Domain Decomposition framework (DD-DA, for short) that performs a decomposition of the whole physical domain along the space and time directions, joining the idea of Schwarz methods and parallel-in-time approaches. For effective parallelization of DD-DA algorithms, the computational load assigned to subdomains must be equally distributed. Usually the computational cost is proportional to the number of data entities assigned to each partition. Good-quality partitioning also requires the volume of communication during the calculation to be kept to a minimum. In order to deal with DD-DA problems in which the observations are nonuniformly distributed and generally sparse, in the present work we employ a parallel load-balancing algorithm, called DyDD, based on adaptive and dynamic redefinition of the DD boundaries, aimed at balancing the workload according to data location. As the numerical model underlying DA problems arising from the so-called discretize-then-optimize approach is the constrained least squares model (CLS), we use CLS as a reference state estimation problem and validate DyDD in different scenarios.
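    The balancing idea can be illustrated with a deliberately simplified, one-dimensional sketch (this is not the DyDD algorithm itself): given the observation locations, subdomain boundaries are placed at observation quantiles so that each subdomain receives roughly the same number of observations.

```python
import numpy as np

def balanced_boundaries(obs_x, domain, n_sub):
    """Place subdomain boundaries so that each subdomain holds roughly the
    same number of observations (a 1D, quantile-based illustration of
    data-location-driven load balancing; not the paper's DyDD algorithm)."""
    a, b = domain
    # interior boundaries at observation quantiles -> near-equal counts
    levels = np.linspace(0.0, 1.0, n_sub + 1)[1:-1]
    interior = np.quantile(obs_x, levels)
    return np.concatenate(([a], interior, [b]))

# toy usage: observations clustered near the right end of [0, 1]
rng = np.random.default_rng(0)
obs_x = rng.beta(5.0, 1.5, size=1000)        # nonuniformly distributed observations
bounds = balanced_boundaries(obs_x, (0.0, 1.0), n_sub=4)
counts, _ = np.histogram(obs_x, bins=bounds)
print(bounds)   # boundaries shifted toward the data-dense region
print(counts)   # roughly 250 observations per subdomain
```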

    A scalable approach for Variational Data Assimilation

    Data assimilation (DA) is a methodology for combining mathematical models simulating complex systems (the background knowledge) and measurements (the reality, or observational data) in order to improve the estimate of the system state (the forecast). DA is an inverse, ill-posed problem that typically has to handle a huge amount of data, which makes it large and computationally expensive. Here we focus on scalable methods that make DA applications feasible for a huge number of background data and observations. We present a scalable, highly parallel algorithm for solving variational DA. We provide a mathematical formalization of this approach and study the performance of the resulting algorithm.
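    As a minimal, self-contained illustration of what combining the background and the observations means in the variational setting (not the scalable algorithm of the paper), for a linear observation operator H the state minimizing the standard cost J(x) = (x - x_b)^T B^{-1} (x - x_b) + (Hx - y)^T R^{-1} (Hx - y) has the closed form used below.

```python
import numpy as np

def var_da_analysis(xb, B, H, y, R):
    """Analysis state minimizing the variational DA cost
    J(x) = (x - xb)^T B^{-1} (x - xb) + (H x - y)^T R^{-1} (H x - y)
    for a linear observation operator H, via the closed-form gain."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)                   # background corrected by observations

# toy usage: 3 state variables, 2 of them observed
xb = np.array([1.0, 2.0, 3.0])                     # background (model) state
B  = 0.5 * np.eye(3)                               # background error covariance
H  = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])                   # observe the 1st and 3rd variables
y  = np.array([1.4, 2.6])                          # observations
R  = 0.1 * np.eye(2)                               # observation error covariance
print(var_da_analysis(xb, B, H, y, R))             # analysis pulled toward y
```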

    Driving NEMO Towards Exascale: Introduction of a New Software Layer in the NEMO Stack Software

    This paper addresses scientific challenges related to high-level implementation strategies that lead NEMO to effectively exploit the opportunities offered by exascale systems. We consider two software modules as proof of concept: the Sea Surface Height equation solver and the Variational Data Assimilation system, which are components of the NEMO ocean model (OPA). Advantages arising from the introduction of consolidated scientific libraries in NEMO are highlighted: such advantages concern both the improvement of software quality (in terms of parameters such as robustness, portability, resilience, etc.) and the reduction of software development time.

    Numerical Solution of Diffusion Models in Biomedical Imaging on Multicore Processors

    In this paper, we consider nonlinear partial differential equations (PDEs) of diffusion/advection type underlying most problems in image analysis. As a case study, we address the segmentation of medical structures. We perform a comparative study of the numerical algorithms arising from the semi-implicit and the fully implicit discretization schemes. The comparison criteria take into account both the accuracy and the efficiency of the algorithms. As measures of accuracy, we consider the Hausdorff distance and the residuals of the numerical solvers, while as measures of efficiency we consider the convergence history, execution time, speedup, and parallel efficiency. This analysis is carried out in a multicore-based parallel computing environment.
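    The abstract does not state the specific model, so, as a representative example of the diffusion-type PDEs involved, consider a Perona-Malik-like equation u_t = \nabla\cdot( g(|\nabla u|)\,\nabla u ). The two discretization schemes compared above then differ in where the nonlinearity is evaluated:

\[
\frac{u^{n+1} - u^{n}}{\tau} \;=\; \nabla \cdot \big( g(|\nabla u^{n}|)\, \nabla u^{n+1} \big)
\qquad \text{(semi-implicit: one linear system per time step),}
\]

\[
\frac{u^{n+1} - u^{n}}{\tau} \;=\; \nabla \cdot \big( g(|\nabla u^{n+1}|)\, \nabla u^{n+1} \big)
\qquad \text{(fully implicit: a nonlinear system per time step).}
\]

    This is the trade-off between cheaper linear solves and more expensive nonlinear iterations that the accuracy and efficiency criteria listed above are used to measure.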